AI and bias


AI and bias: Machines are less biased than people - Verdict

#artificialintelligence

We hear a lot these days about the potential dangers of "AI bias." If a machine learning system is trained on a data set that is biased by age, gender, race, ethnicity, income, education, geography, or some other factor, the system's outputs will tend to reflect those biases. Because the inner workings of ML systems are often impossible for an outsider to fully understand, any such biases can remain hidden, making them seem especially sinister. But before getting too alarmed, ask yourself this: over the long run, which decision-making model is likely to be more objective, human or machine reasoning?
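
To make the mechanism concrete, here is a minimal sketch (not from the article) of how a model trained on historically biased labels reproduces that bias in its outputs. The group and skill variables, the synthetic data, and the use of scikit-learn's LogisticRegression are all illustrative assumptions.

```python
# Hypothetical illustration: bias in training labels propagates to predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Assumed sensitive attribute (0 or 1) and a genuinely predictive score.
group = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)

# Historically biased labels: group 1 was approved less often at equal skill.
approve_prob = 1 / (1 + np.exp(-(skill - 1.0 * group)))
label = rng.random(n) < approve_prob

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# The trained model reproduces the historical penalty against group 1.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2%}")
```

Running this shows roughly a 50% approval rate for group 0 versus about 16% for group 1, even though skill is distributed identically in both groups: the model has simply learned the historical pattern.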


AI and bias - IBM Research - US

#artificialintelligence

AI systems are only as good as the data we put into them. Bad data can contain implicit racial, gender, or ideological biases, and many AI systems will continue to be trained on such data, making this an ongoing problem. But we believe that bias can be tamed, and that the AI systems that tackle it will be the most successful. A crucial principle, for both humans and machines, is to avoid bias and therefore prevent discrimination.
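
A first step toward taming bias is measuring it. Below is a minimal sketch (an illustration of one common metric, not IBM's method) of the demographic parity difference: the gap in positive-prediction rates between two groups. The predictions and group labels are made-up values.

```python
# Hypothetical bias audit: demographic parity difference between two groups.
import numpy as np

def demographic_parity_difference(pred: np.ndarray, group: np.ndarray) -> float:
    """Positive-prediction rate of group 1 minus that of group 0."""
    return float(pred[group == 1].mean() - pred[group == 0].mean())

# Assumed outputs of some trained model, plus each person's group membership.
pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_difference(pred, group)
print(f"demographic parity difference: {gap:+.2f}")  # -0.40: group 1 favored less
```

A value near zero suggests the model treats the two groups similarly on this metric; a large gap, as here, flags a disparity worth investigating before deployment.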


What Developers Really Think About AI And Bias

@machinelearnbot

Stack Overflow recently asked 100,000 developers to participate in a 30-minute, wide-ranging survey that hits on just this. It's worth noting that the developers themselves were every bit as homogeneous as you might expect: 93% male, 74% white, 93% heterosexual, about half between the ages of 24 and 35. But their answers are a revealing look into how people writing code think about AI, namely that they're concerned about its impact on society. This year we saw many designers offering a mea culpa, admitting that they were responsible for dark patterns and other manipulative bits of UI that shape human behavior in bad ways. So what do the surveyed developers think about their own role in creating tech, including AI?